
    Tensors in modelling multi-particle interactions

    Full text link
    In this work we present recent results on the application of low-rank tensor decompositions to the modelling of aggregation kinetics that takes multi-particle collisions (three and more particles) into account. Such kinetics can be described by a system of nonlinear differential equations whose right-hand side requires $N^D$ operations for a straightforward evaluation, where $N$ is the number of particle size classes and $D$ is the number of particles colliding simultaneously. This complexity can be reduced significantly by applying low-rank tensor decompositions (either Tensor Train or Canonical Polyadic) to accelerate the evaluation of the sums and convolutions in the right-hand side. Building on this drastic reduction in the cost of evaluating the right-hand side, we employ a standard second-order Runge-Kutta time integration scheme and demonstrate that our approach yields numerical solutions of the studied equations with very high accuracy in modest time. We also show preliminary results on the parallel scalability of the novel approach and conclude that it can be used efficiently on supercomputers.
    Comment: LaTeX, 8 pages, 3 figures, submitted to the proceedings of the LSSC'19 conference, Sozopol, Bulgaria.
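
    A minimal sketch of the cost reduction described above, assuming the collision kernel is already given in rank-$R$ Canonical Polyadic form (the factor names and sizes below are illustrative, not taken from the paper): a $D$-particle collision sum that costs $N^D$ operations when evaluated directly collapses to $O(DNR)$ operations once the kernel is factorized.

    ```python
    import numpy as np

    # Collision sum S = sum_{i,j,k} K[i,j,k] * n[i] * n[j] * n[k] for D = 3.
    # With a rank-R CP factorization K[i,j,k] ~ sum_r U0[i,r] * U1[j,r] * U2[k,r],
    # the sum factorizes into one dot product per dimension: O(D*N*R) work.
    N, D, R = 100, 3, 4                           # size classes, particles, CP rank
    rng = np.random.default_rng(0)
    U = [rng.random((N, R)) for _ in range(D)]    # hypothetical CP factors of the kernel
    n = rng.random(N)                             # particle concentrations

    S_lowrank = np.prod([u.T @ n for u in U], axis=0).sum()

    # Dense check (feasible only for small N and D): rebuild K and contract directly.
    K = np.einsum('ir,jr,kr->ijk', *U)
    S_dense = np.einsum('ijk,i,j,k->', K, n, n, n)
    assert np.isclose(S_lowrank, S_dense)
    ```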

    Neural Networks Compression for Language Modeling

    Full text link
    In this paper, we consider several compression techniques for the language modeling problem based on recurrent neural networks (RNNs). It is known that conventional RNNs, e.g., LSTM-based networks used in language modeling, exhibit either high space complexity or substantial inference time. This problem is especially crucial for mobile applications, in which constant interaction with a remote server is undesirable. Using the Penn Treebank (PTB) dataset, we compare pruning, quantization, low-rank factorization, and tensor train decomposition for LSTM networks in terms of model size and suitability for fast inference.
    Comment: Keywords: LSTM, RNN, language modeling, low-rank factorization, pruning, quantization. Published by Springer in the LNCS series, 7th International Conference on Pattern Recognition and Machine Intelligence, 2017.
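
    As an illustration of the low-rank factorization option compared here, a truncated SVD can replace one dense weight matrix by two thin factors. This is a minimal sketch with hypothetical layer sizes, not the paper's training code:

    ```python
    import numpy as np

    # Replace a dense weight matrix W (m x n) by factors A (m x r) and B (r x n),
    # shrinking m*n parameters to r*(m + n) at the cost of an approximation error.
    m, n, r = 1024, 4096, 64                       # hypothetical LSTM layer sizes
    rng = np.random.default_rng(0)
    W = rng.standard_normal((m, n))                # stands in for a trained weight

    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]                           # m x r
    B = Vt[:r, :]                                  # r x n

    print(f"compression: {m * n / (r * (m + n)):.1f}x parameters")

    x = rng.standard_normal(n)
    y = A @ (B @ x)                                # two small matvecs replace one large one
    ```

    In practice the thin factors are typically fine-tuned after factorization so that the loss in perplexity stays small.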

    Low rank approximation of multidimensional data

    Get PDF
    In the last decades, numerical simulation has experienced tremendous improvements driven by the massive growth of computing power. Exascale computing has been achieved this year and will allow solving ever more complex problems. But such large systems produce colossal amounts of data, which leads to difficulties of its own. Moreover, many engineering problems, such as multiphysics or optimisation and control, require far more power than any computer architecture could achieve within the current scientific computing paradigm. In this chapter, we propose to shift the paradigm in order to break the curse of dimensionality by introducing decompositions that reduce the data. We present an extended review of data reduction techniques that intends to bridge the applied mathematics community and the computational mechanics one. The chapter is organized into two parts. In the first, bivariate separation is studied, including discussions on the equivalence of proper orthogonal decomposition (POD, continuous framework) and singular value decomposition (SVD, discrete matrices). Then, in the second part, a wide review of tensor formats and their approximation is proposed. Such work has already been provided in the literature, but either in separate papers or within a purely applied mathematics framework. Here, we offer the data-enthusiast scientist a description of the Canonical, Tucker, Hierarchical and Tensor Train formats, including their approximation algorithms. Where possible, a careful analysis of the link between continuous and discrete methods is performed.
    IV Research and Transfer Plan of the University of Sevilla; Institut Carnot; Junta de Andalucía; IDEX program of the University of Bordeaux.
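
    The POD/SVD equivalence discussed in the first part can be summarized in a few lines: by the Eckart-Young theorem, the truncated SVD of a snapshot matrix gives the best rank-$r$ bivariate separation in the Frobenius norm. A minimal sketch, with an illustrative smooth field and an assumed 99.9% energy criterion:

    ```python
    import numpy as np

    # Snapshot matrix X: rows are spatial degrees of freedom, columns are snapshots.
    x = np.linspace(-1.0, 1.0, 500)[:, None]
    t = np.linspace(0.0, 1.0, 300)[None, :]
    X = np.exp(-5.0 * (x - t) ** 2)                # smooth field, fast-decaying spectrum

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1    # smallest rank keeping 99.9% energy
    X_r = (U[:, :r] * s[:r]) @ Vt[:r]              # best rank-r approximation

    rel_err = np.linalg.norm(X - X_r) / np.linalg.norm(X)
    print(f"rank {r} of {len(s)}, relative error {rel_err:.2e}")
    ```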

    Tensor Product Approximation (DMRG) and Coupled Cluster method in Quantum Chemistry

    Full text link
    We present the Coupled Cluster (CC) method and the Density Matrix Renormalization Group (DMRG) method in a unified way, from the perspective of recent developments in tensor product approximation. We give an introduction to recently developed hierarchical tensor representations, in particular tensor trains, which are matrix product states in the language of physics. The discrete equations of the full CI approximation applied to the electronic Schrödinger equation are cast into a tensorial framework in the form of second quantization. A further approximation is then performed by tensor approximation within a hierarchical format, or equivalently a tree tensor network. We establish the (differential) geometry of low-rank hierarchical tensors and apply the Dirac-Frenkel principle to reduce the original high-dimensional problem to low dimensions. The DMRG algorithm is established as an optimization method in this format with alternating directional search. We briefly introduce the CC method and refer to our theoretical results. We compare this approach, in the present discrete formulation, with the CC method and its underlying exponential parametrization.
    Comment: 15 pages, 3 figures.
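
    The tensor-train representation mentioned above can be computed for an explicitly given tensor by the standard TT-SVD procedure: sequential reshapes followed by truncated SVDs. The sketch below is a simplified illustration with an ad hoc truncation rule, not the quantum-chemistry pipeline of the paper:

    ```python
    import numpy as np

    def tt_svd(T, eps=1e-10):
        # Sequentially reshape and SVD-truncate: T -> G_1, ..., G_d with
        # G_k of shape (r_{k-1}, n_k, r_k); rank chosen by a simple eps * s[0] cutoff.
        dims = T.shape
        cores, r = [], 1
        M = T.reshape(r * dims[0], -1)
        for k in range(len(dims) - 1):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            rk = max(1, int(np.sum(s > eps * s[0])))      # truncate small singular values
            cores.append(U[:, :rk].reshape(r, dims[k], rk))
            M = (s[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
            r = rk
        cores.append(M.reshape(r, dims[-1], 1))
        return cores

    rng = np.random.default_rng(0)
    T = np.einsum('i,j,k,l->ijkl', *(rng.random(6) for _ in range(4)))  # rank-1 test
    cores = tt_svd(T)
    print([c.shape for c in cores])   # all TT ranks collapse to 1 for a rank-1 tensor
    ```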

    Approximating turbulent and non-turbulent events with the Tensor Train decomposition method

    Get PDF
    Low-rank multilevel approximation methods are often well suited to attacking high-dimensional problems successfully, and they allow very compact representations of large data sets. Specifically, hierarchical tensor product decomposition methods, e.g., the Tree-Tucker format and the Tensor Train format, emerge as a promising approach for application to data concerned with cascade-of-scales problems as, e.g., in turbulent fluid dynamics. Beyond multilinear mathematics, these tensor formats are also applied successfully in, e.g., physics or chemistry, where they are used in many-body problems and quantum states. Here, we focus on two particular objectives: we aim at capturing self-similar structures that might be hidden in the data, and we present the reconstruction capabilities of the Tensor Train decomposition method tested with 3D channel turbulence flow data.
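
    To make the reconstruction test concrete: given TT cores, the full field is recovered by contracting the cores left to right, and the compression ratio is the full array size divided by the total core size. A minimal sketch with a toy separable "field" standing in for channel-flow data (the data handling here is illustrative only):

    ```python
    import numpy as np

    def tt_reconstruct(cores):
        # Contract TT cores (r_{k-1}, n_k, r_k) left to right into the full array.
        M = cores[0]
        for core in cores[1:]:
            M = np.tensordot(M, core, axes=([-1], [0]))
        return M.squeeze(axis=(0, -1))

    # Toy separable "field" on a 64^3 grid, stored exactly as a rank-1 tensor train.
    x = np.linspace(0.0, 1.0, 64)
    cores = [np.sin(np.pi * x).reshape(1, 64, 1),
             np.cos(np.pi * x).reshape(1, 64, 1),
             x.reshape(1, 64, 1)]
    T_full = np.einsum('i,j,k->ijk', np.sin(np.pi * x), np.cos(np.pi * x), x)

    T_hat = tt_reconstruct(cores)
    ratio = T_full.size / sum(c.size for c in cores)
    print(np.allclose(T_full, T_hat), f"compression {ratio:.0f}x")
    ```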

    Geometric methods on low-rank matrix and tensor manifolds

    Get PDF
    In this chapter we present numerical methods for low-rank matrix and tensor problems that explicitly make use of the geometry of rank-constrained matrix and tensor spaces. We focus on two types of problems. The first are optimization problems, like matrix and tensor completion, solving linear systems, and eigenvalue problems. Such problems can be solved by numerical optimization on manifolds, using so-called Riemannian optimization methods. We explain the basic elements of differential geometry needed to apply such methods efficiently to rank-constrained matrix and tensor spaces. The second type of problem is ordinary differential equations defined on matrix and tensor spaces. We show how their solution can be approximated by the dynamical low-rank principle, and discuss several numerical integrators that rely in an essential way on geometric properties characteristic of sets of low-rank matrices and tensors.
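
    A minimal sketch of the first problem class, in the spirit of Riemannian gradient descent on the fixed-rank manifold for matrix completion (the step size, iteration count, and SVD-based retraction below are illustrative choices, not the chapter's algorithms):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, r = 60, 50, 3
    M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # ground truth
    mask = rng.random((m, n)) < 0.5                                 # observed entries

    def retract(Y, r):
        # Retraction onto the rank-r manifold via truncated SVD.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r]

    X = retract(mask * M, r)                       # simple spectral initialization
    for _ in range(500):
        G = mask * (X - M)                         # Euclidean gradient of the fit term
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        U, Vt = U[:, :r], Vt[:r]
        # Project G onto the tangent space of the rank-r manifold at X.
        PG = U @ (U.T @ G) + (G @ Vt.T) @ Vt - U @ (U.T @ G @ Vt.T) @ Vt
        X = retract(X - 0.5 * PG, r)               # gradient step, then retraction

    print(np.linalg.norm(X - M) / np.linalg.norm(M))   # relative error, should be small
    ```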